Search Results: "pascal"

5 October 2014

Stefano Zacchiroli: je code

je.code(); promoting programming (in French) jecode.org is a nice initiative by, among others, my fellow Debian developer and university professor Martin Quinson. The goal of jecode.org is to raise awareness about the importance of learning the basics of programming for everyone in modern societies. jecode.org specifically targets francophone children (hence the name, French for "I code"). I've been happy to contribute to the initiative with my thoughts on why learning to program is so important today, joining the merry bunch of "codeurs" on the web site. My answers are reposted below, translated into English; the French originals are on the site. If you write French, you might want to contribute your thoughts on the matter. How? By forking the project, of course!
Why do you code? First of all, I code because it is a fascinating and fun activity, one that lets you experience the pleasure of creating. Secondly, I code to automate the repetitive tasks that can make our digital lives painful. A computer is designed for exactly that: freeing human beings from stupid tasks, so that they can concentrate on the tasks that need human intelligence to be solved. But I also code for the pure pleasure of hacking, i.e., finding original and unexpected uses for existing software.

How did you learn? Completely by chance, as a kid. At 7 or 8 years old, in the municipal library of my small village, I stumbled upon a book that taught programming in BASIC through the metaphor of the game of the goose. From that day on, I used my father's Commodore 64 much more for programming than for video games: coding is so much more fun! Later, in high school, I came to appreciate structured programming and the enormous advantages it brings over BASIC's GO TOs, and I became a Pascal addict. The rest came with university and the discovery of Free Software: the Ali Baba's cave of the curious coder.

What is your favourite language? I have several favourite languages. I like Python for its syntactic minimalism, its vast and well-organized community, and the abundance of tools and resources it offers. I use Python to develop medium- to large-scale infrastructure (often equipped with web interfaces), especially if I want to build a community of contributors around the software. I like OCaml for its type system and its ability to capture the good properties of complex applications, which lets the compiler help developers enormously in avoiding both coding and design errors. I also use Perl and shell scripts (mainly Bash) a lot for task automation: the ability of these languages to glue other applications together is still unequalled.

Why should everyone learn to program, or at least be introduced to it? We depend more and more on software. When we use a dishwasher, drive a car, get treated in a hospital, communicate on a social network, or surf the web, our activities are constantly carried out by software. Whoever controls that software controls our lives. As citizens of an increasingly digital world, if we are not to become slaves 2.0, we must demand control over the software that surrounds us. To achieve that, Free Software, which lets us use, study, modify, and redistribute software without restrictions, is an indispensable ingredient, as is a broad diffusion of programming skills: every bit of knowledge in this domain makes us all more free.

12 September 2014

John Goerzen: The Thrill and Stress of Too Many Hobbies

Today, 4PM. Jacob and Oliver excitedly peer at the box in our kitchen: a really big box, taller than them. Inside is the first model airplane I'd ever purchased. The three of us hunkered down on the kitchen floor, opened the box, unpacked the parts, examined the controller, and found the manual with cryptic assembly directions. Oliver turned some screws while Jacob checked out the levers on the controllers. Then they both left for a bit to play with their toy buses. A little while later, the three of us went outside. It was too windy to fly. I had never flown an RC plane before, only RC quadcopters (much easier to fly), plus some practice time on an RC simulator. But the excitement was too much. So out we went, and the plane took off perfectly, climbed, flew over the trees, and circled above our heads at my command. I even managed a good landing in the wind, despite about 5 aborted attempts due to coming in too high, wrong angle, too fast, or last-minute gusts of wind throwing everything off. I am not sure how I pulled that all off on my first flight, but somehow I did! It was thrilling! I've had a lot of hobbies in my life. Computers have run through many of them; I learned Pascal (a programming language) at about the same time I learned cursive handwriting, and started with C at around age 10. It was all fun. I've been a Debian developer for some 18 years now, and have written a lot of code, and even books about code, over the years. Photography, music, literature, history, philosophy, and theology have been interests for quite some time as well. In the last few years, I've picked up amateur radio, model aircraft, etc. And last month, Laura led me into Ada's Technical Books during our visit to Seattle, resulting in me getting interested in Arduino. (The boys and I have already built a light-activated crossing gate for their HO-gauge model trains, and Jacob can now say he's edited a few characters of C!) Sometimes I find ways to merge hobbies; I've set up all sorts of amateur radio systems on Linux, take aerial photographs, and set up systems to stream music in my house. But I also have a lot less time for hobbies overall than I once did; other things in life, such as my children, are more important. Some of the code I once worked on actively I no longer use or maintain, and I feel guilty about that when people send bug reports that I have no interest in fixing anymore. Sometimes I feel a need to cut down, and perhaps have; and then, I get an interest in RC aircraft and find an airplane that is great for a beginner and fairly inexpensive. Perhaps it is the curse of being a curious person living in an interesting world. Do any of the rest of you have a large number of hobbies? How do you feel about that?

25 March 2014

Sylvain Le Gall: Release of OASIS 0.4.3

I am happy to announce the release of OASIS v0.4.3. OASIS is a tool that helps OCaml developers integrate configure, build and install systems into their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily. This tool is loosely inspired by Cabal, which is the same kind of tool for Haskell. You can find the new release here and the changelog here. More information about OASIS in general is on the OASIS website. Here is a quick summary of the important changes: this new version closes 4 bugs, mostly related to parsing of _oasis, and also includes a lot of refactoring to improve the overall quality of the OASIS code base. The big project for the next release will be to set up a Windows host for regular builds and tests on this platform. I plan to use WODI for this setup. I would like to thank again the contributors for this release: David Allsopp, Martin Keegan and Jacques-Pascal Deplaix. Their help is greatly appreciated.

25 July 2013

John Goerzen: Development Freedom in New Platforms

I started writing code when I was somewhere around 1st grade, hacking some rudimentary BASIC on a TRS-80. Since then, I've used more programming languages than I can count; everything from C to Prolog, Python to Haskell, and probably a dozen others. I enjoy diversity in programming languages and tools. I've had an interest in both web and mobile development for some time, but have done little with either. Reflecting on that lately, I realize that both platforms severely restrict my freedom to use the tools I choose. Even on an old x86 with 640K of RAM, I had choices; BASIC, Pascal, and C would work under DOS. (Plus assembler, but I had yet to learn that at the time.) On Android, my choices are Java or Java. (OK, or something that compiles to Java, but those selections are still limited.) Plus it seems like I'd have to use Eclipse, which seems to have taken the kitchen-sink crown from Emacs long ago, and not in a good way. On iOS, the choice is Objective C. And on the web, it's JavaScript. The problem with all this is that there's little room for language innovation on those platforms. If predictions about the decline of development on the PC pan out, how will the next advance in programming languages gain traction if the hot spots for development are locked down to three main languages? Will we find ourselves only willing to consider languages that can compile to Java/Dalvik bytecode? And won't that put limits on the possible space for innovation to occur within? Yes, I know about things like Mono and the Android Scripting Environment, which offer some potential for hope, but I think it's still safe to say that these platforms are pretty closed to development in unofficial languages. I don't know of major apps in the Android market written in Python, for instance. I know of efforts such as Ubuntu's, but even they simply lock down development to a different toolkit. There is no choice there of GTK vs. Qt, or such things. Sadly, I don't think it has to be this way. My phone has way more power than my computer did just a few years ago, and that computer was capable of not just executing, but even compiling, code written in all sorts of different languages. But I don't know if there's much we can do to change things. Even with native code on Android, plenty still has to go through Dalvik (at least that's my understanding). If you wanted to write a Python app with a full graphical touch interface on Android, I don't think this would be all that easy of a task.

10 February 2013

David Bremner: First Steps with Argyll and ColorHug

In April of 2012 I bought a ColorHug colorimeter. I got a bit discouraged when the first thing I realized was that one of my monitors needed replacing, and put the ColorHug in a drawer until today. With quite a lot of help and encouragement from Pascal de Bruijn, I finally got it going. Pascal has written an informative blog post on color management; that's a good place to look for background. This is more of a "write down the commands so I don't forget" sort of blog post, but it might help somebody else trying to calibrate their monitor using Argyll on the command line. I'm not running GNOME, so using GNOME Color Manager turns out to be a bit of a hassle. I run Debian Wheezy on this machine, and I'll mention the packages I used, even though I didn't install most of them today.
  1. Find the masking tape, and tear off a long enough strip to hold the ColorHug on the monitor. This is probably the real reason I gave up last time; it takes about 45 minutes to run the calibration, and I lack the attention span/upper-arm-strength to hold the sensor up for that long. Apparently new ColorHugs are shipping with some elastic.
  2. Update the firmware on the ColorHug. This is a GUI-wizard kind of thing.
     % apt-get install colorhug-client
     % colorhug-flash 
    
  3. Set the monitor to factory defaults. On this ASUS PA238QR, that is brightness 100, contrast 80, R=G=B=100. I adjusted the brightness down to about 70; 100 is kind of eye-burning IMHO.
  4. Figure out which display is which; I have two monitors.
     % dispwin -\?
    
    Look under "-d n"
  5. Do the calibration. This is really verbatim from Pascal, except I added the ENABLE_COLORHUG=true and -d 2 bits.
     % apt-get install argyll
     % ENABLE_COLORHUG=true dispcal -v -d 2 -m -q m -y l -t 6500 -g 2.2 test
     % targen -v -d 3 -G -f 128 test
     % ENABLE_COLORHUG=true dispread -v -d 2 -y l -k test.cal test
     % colprof -v -A "make" -M "model" -D "make model desc" -C "copyright" -q m -a G test
    
  6. Load the profile
     % dispwin -d 2 -I test.icc           
    
    It seems this only loads the X property _ICC_PROFILE_1 instead of _ICC_PROFILE; whether this works for a particular application is not 100% guaranteed. It seems OK for darktable and gimp.
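    A quick way to see which atoms actually ended up on the root window (this assumes xprop from the x11-utils package is installed):
     % xprop -root | grep _ICC_PROFILE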

30 December 2012

Keith Packard: MicroPeakSerial

MicroPeak Serial Interface: Flight Logging for MicroPeak

MicroPeak was originally designed as a simple peak-recording altimeter. It displays the maximum height of the last flight by blinking out numbers on the LED. Peak recording is fun and easy, but you need a log across apogee to check for unexpected bumps in baro data caused by ejection events. NAR also requires a flight log for altitude records. So, we wondered what could be done with the existing MicroPeak hardware to turn it into a flight logging altimeter.

Logging the data

The 8-bit ATtiny85 used in MicroPeak has 8kB of flash to store the executable code, but it also has 512B (yes, B as in bytes) of eeprom storage for configuration data. Unlike the code flash, the little eeprom can be rewritten 100,000 times, so it should last for a lifetime of rocketry. The original MicroPeak firmware already used that to store the average ground pressure and minimum pressure (in Pascals) seen during flight; those are used to compute the maximum height that is shown on the LED. If we store just the two low-order bytes of the pressure data, we'd have room left for 251 data points. That means capturing data at least every 32kPa (so the discarded high-order bytes can be reconstructed from sample to sample), which is about 3km at sea level. 251 points isn't a whole lot of storage, but we really only need to capture the ascent and arc across apogee, which generally occurs within the first few seconds of flight. MicroPeak samples air pressure once every 96ms; if we record half of those samples, we'll have data every 192ms. 251 samples every 192ms captures 48 seconds of flight. A flight longer than that will just see the first 48 seconds. Of course, if apogee occurs after that limit, MicroPeak will still correctly record that value, it just won't have a continuous log.

Downloading the data

Having MicroPeak record data to the internal eeprom is pretty easy, but it's not a lot of use if you can't get the data into your computer. However, there aren't a whole lot of interfaces available on MicroPeak. We've only got the AVR programming pins and the LED itself.

First implementation

I changed the MicroPeak firmware to capture data to eeprom and made a test flight using my calibrated barometric chamber (a large syringe). I was able to read out the flight data using the AVR programming pins and got the flight logging code working that way. The plots I created looked great, but using an AVR programmer to read the data looked daunting for most people. With the hardware running at least $120 retail, and requiring a pile of software installed from various places around the net, this approach didn't seem like a great way to let people easily capture flight data from their tiny altimeter.

The Blinking LED

The only other interface available is the MicroPeak LED. It's a nice LED, bright and orange and low power. But, it's still just a single LED. However, it seemed like it might be possible to have it blink out the data and create a device to watch the LED and connect that to a USB port. The simplest idea I had was to just blink out the data in asynchronous serial form: a start bit, 8 data bits and a stop bit. On the host side, I could use a regular FTDI FT230 USB to serial converter chip. Those even have a 3.3V regulator and can supply a bit of current to other components on the board, eliminating the need for an external power supply. To see the LED blink, I needed a photo-transistor that actually responds to the LED's wavelength. Most photo-transistors are designed to work with infrared light, which nicely makes the whole setup invisible.
There are a few photo-transistors available which do respond in the visible range, and the ROHM RPM-075PT actually has its peak sensitivity right in the same range as the LED. In between the photo-transistor and the FT230, I needed a detector circuit which would send a 1 when the light was present and a 0 when it wasn't. To me, that called for a simple comparator made from an op-amp. Set the voltage on the negative input to somewhere between light and dark and then drive the positive input from the photo-transistor; the output would swing from rail to rail.

Bit-banging async

The ATtiny85 has only a single "serial port", which is used on MicroPeak to talk to the barometric sensor in SPI mode. So, sending data out the LED requires that it be bit-banged, that is, directly modulated by the CPU. I wanted the data transmission to go reasonably fast, so I picked a rate of 9600 baud as a target. That means sending one bit every 104 µs. As the MicroPeak CPU is clocked at only 250kHz, that leaves only about 26 cycles per bit. I need all of the bits to go at exactly the same speed, so I pack the start bit, 8 data bits and stop bit into a single 16 bit value and then start sending. Of course, every pass around the loop would need to take exactly the same number of cycles, so I carefully avoided any conditional code. With that, 14 of the 26 cycles were required just to get the LED set to the right value. I padded the loop with 12 nops to make up the remaining time. At 26 cycles per bit, it's actually sending data at a bit over 9600 baud, but the FT230 doesn't seem to mind. (A sketch of this loop appears after the sample data below.)

A bit of output structure

I was a bit worried about the serial converter seeing other light as random data, so I prefixed the data transmission with "MP"; that made it easy to ignore anything before those two characters as probably noise. Next, I decided to checksum the whole transmission. A simple 16-bit CRC would catch most small errors; it's easy enough to re-try the operation if it fails, after all. Finally, instead of sending the data in binary, I displayed each byte as two hex digits, and sent some newlines along to keep the line lengths short. This makes it easy to ship flight logs in email or whatever. Here's a sample of the final data format:
MP
dc880100fec000006800f56d8f63b059
73516447273fa93728301927d91b7712
730bbf0491fe88f7c5ee8ee896e3fadc
9dd9d3d502d1afcea2cbafc6b4c34ec1
bfbfcabf10c03dc05dc070c084c08fc0
9cc0abc0b9c0c1c0ccc0dcc020c152c4
71c9a6cf45d623db7de05ee758edd9f2
b4f9fd00aa074311631a9221c4291330
c035873b2943084bbb52695c0c67eb6b
d26ee5707472fb74a4781f7dee802b84
09860a87e786ad868a866e8659865186
4e8643863e863986368638862e862d86
2f862d86298628862a86268629862686
28862886258625862486
d925
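To make the constant-time transmit loop concrete, here is a minimal sketch of how such a routine could look. This is my reconstruction, not the actual MicroPeak firmware: the pin assignment (LED_PIN) is hypothetical, and the nop padding would need tuning against the generated assembly to hit exactly 26 cycles per pass.

    #include <avr/io.h>
    #include <stdint.h>

    #define LED_PIN 1  /* hypothetical: whichever PORTB bit drives the LED */

    static void send_byte(uint8_t byte)
    {
        /* Pack start bit (0), 8 data bits (LSB first) and stop bit (1)
           into one 16-bit frame so every bit takes the same code path. */
        uint16_t frame = ((uint16_t) byte << 1) | (1 << 9);
        uint8_t i;

        for (i = 0; i < 10; i++) {
            /* Branch-free: compute the new PORTB value without any
               conditional code, so the timing never varies per bit. */
            uint8_t bit = frame & 1;
            PORTB = (PORTB & ~(1 << LED_PIN)) | (bit << LED_PIN);
            frame >>= 1;
            /* Pad each pass out to 26 cycles: 104 µs per bit at 250 kHz,
               which is roughly 9600 baud. */
            __asm__ __volatile__("nop\n nop\n nop\n nop\n nop\n nop\n"
                                 "nop\n nop\n nop\n nop\n nop\n nop\n");
        }
    }

On the host side, the FT230 just presents a normal 9600 baud serial port, so collecting a log is a matter of reading lines between the "MP" marker and the trailing CRC.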
Making the photo-transistor go fast enough

The photo-transistor acts as one half of a voltage divider on the positive op-amp terminal, with a resistor making the other half. However, the photo-transistor also acts a bit like a capacitor, so when I initially chose a fairly large value for the resistor, it actually took too long to switch between on and off; the transistor would spend a bunch of time charging and discharging. I had to reduce the resistor to 1k for the circuit to work.

Remaining hardware design

I prototyped the circuit on a breadboard using a through-hole op-amp that my daughter designed into her ultrasonic guided robot, and a prefabricated FTDI Friend board. I wanted to use the target photo-transistor, so I soldered a couple of short pieces of wire onto the SMT pads and stuck that into the breadboard. Once I had that working, I copied the schematic to gschem, designed a board and had three made at OSH Park for the phenomenal sum of $1.35. Aside from goofing up on the FT230 USB data pins (swapping D+ and D-), the board worked perfectly. The final hardware design includes an LED connected to the output of the comparator that makes it easier to know when things are lined up correctly; otherwise it will be essentially the same.

Host software

Our AltosUI code has taught us a lot about delivering code that runs on Linux, Mac OS X and Windows, so I'm busy developing something based on the same underlying Java bits to support MicroPeak. Here's a sample of the graph results so far.

Production plans

I've ordered a couple dozen raw boards from OSH Park, and once those are here, I'll build them and make them available for sale in a couple of weeks. The current plan is to charge $35 for the MicroPeak serial interface board, or sell it bundled with MicroPeak for $75.

1 November 2012

Antoine Beaupré: My short computing history

This is a long story. I wanted to document which tools I used on my desktop, but then realized it was difficult to do this without an historical context. This, therefore, became a history of my computer usage and programming habits.

Before the desktop: vic 20 and apple My first contacts with computers were when my uncle showed me his Apple computers at his home. He was working at UQAM and had access to all sorts of crazy computers. My memory is vague, but I believe he showed me a turtle moving around the screen, which I suspect was Logo, and some rabbit game which I was quite fond of. Later, I hooked up with a Vic-20 (similar to a Commodore 64) and typed in my first programs. Back then, you would painstakingly transcribe the program from a book into the screen, type RUN at the end and really hope that you didn't make a typo. You could also read and write data off audio tape with an external tape drive, which was really amazing. To put things into perspective, the Vic-20 would plug into your television for a display, and the computer was "in the keyboard". Old school shit. The language was BASIC, which was awful. Later, we got luxurious and had the privilege of a Macintosh Plus computer. $3000 plus the printer; that was quite something. Still, that modern system could run Word, had games, and a 9" black and white screen. But the best part, and where I really started to learn to program, was HyperCard, the crazy software written by Bill Atkinson that became the inspiration for wikis and the Myst computer game and, to a certain extent, popularized the concept of hyperlinking, which is the basic principle of the World Wide Web.

The early years: afterstep and FreeBSD But my real computer life all started with FreeBSD 2.2.x (probably 2.2.5), around 1997, when I installed my first FLOSS operating system. This was just amazing: source code for everything, down to the compilers themselves. I was amazed, since I was coming from a Mac (still had that Mac Plus lying around) and Windows (95!) world. Back then, X11 (XFree86, it was called) was a fantasy; something I would compile with the FreeBSD ports system and sometimes would be able to run. Window managers were some weird things I wasn't sure how to use, but I figured out twm (yuck!) and was still amazed. At the university, we had SGI workstations (remember the Indigo machines?) on which I started setting up the Afterstep window manager, which I used for a long time. The web browser was Netscape Communicator 4, and lo and behold, it also did email, although I mostly used Pine over a modem terminal to the university. Internet access at home was spotty and depended on students discreetly compiling copies of slirp, which allowed us, in a really clunky way, to run a PPP session over those terminal lines. That actually got my account suspended, because I messed up my sendmail configuration and was sending emails that appeared to be coming as , which freaked out the sysadmins that be over there. The hardware was a Pentium 166Mhz with 32MB of ram and 20GB of disk space, which was a huge improvement over the 1MB of ram and 20MB of (external!) hard disk that the Mac Plus was proud of. By then, I started programming in more languages: I learned the basics of shell scripting and C programming, and I had learned Pascal in a college class.

The real freedom: Debian and... enlightenment? I then switched to Debian, thanks to my friend scyrma, who showed me the power of dselect, which, for a FreeBSD user, was really impressive. Imagine that: you could just install software and all dependencies without compiling anything! ;) I don't exactly remember when that was, but it was probably Woody. I probably still used afterstep, but at that time, changing window managers was like changing t-shirts and we would waste countless hours fiddling around with borders and background images and other silly things. I remember running Enlightenment and writing software for it, for example, but not much else. Back then, email was still over Pine, but a new web browser was rearing its ugly head: Mozilla, which was quickly trimmed down to Phoenix, shortly lived as Mozilla Firebird, only to be finally renamed Mozilla Firefox, and was unleashed as a stable release on an unsuspecting Internet Explorer and Microsoft-dominated world. Throughout this chaos, I also used various Mozilla derivatives like Galeon, since all of this was pretty unstable and heavy for my poor computers. This was probably around 2002 to 2005. By then, I had formal classes on programming languages, was forced to learn Java, discovered C++, and learned more C as the university assumed that you knew that language in the data structures and algorithms class, which some students really didn't find funny at all. I also learned some Prolog and Haskell back then. Through my experiences with free software, I discovered various eclectic programming languages like TCL, and kept an eye out for other languages like Lisp (thanks to Emacs), Forth (thanks to the FreeBSD boot loader), and HTML/CSS/Javascript (no thanks to the web). There are probably others I am forgetting here, voluntarily or not.

The years of the desktops: rox, sawfish and gnome Another stable increment on my desktop was the use of the sawfish window manager and the rox desktop, mostly because of the awesome Rox file manager that had no equal in the rest of the free software world. Gnome was an atrocity and KDE even more so: bloated and unstable. Eventually, however, KDE and Gnome not only matured, but simply took over the whole of the graphical environment psyche of the free world. People nowadays don't even know what a window manager is and assume that you now have a "desktop environment", which, for me, is a real desk with a bunch of papers, a radio transceiver and maybe a computer with a screen lying around. So I did use KDE, and then Gnome. I switched from KDE to Gnome because I felt that Gnome left me more space to run my own stuff without starting crazy daemons to preload everything. At this point, Firefox was the new thing and basically stable enough for daily use. This lasted until around 2009, I guess. On the programming side, I had picked up Perl because I had to learn it to work at Ericsson (don't ask) and learned PHP to pay my rent, which I consider one of the biggest professional mistakes I have made so far in that story. It is around that time that Koumbit was founded by me and other friends.

The tiles fall on the gnomes: awesome desktop and sysadmin programming At some point, friends (mostly comar and millette) introduced me to wmii as the best thing since sliced bread. I banged my head against it with great pains, but I really enjoyed the hands-off approach to window management. Tiling window managers really got a hold on me there, and in fact on the whole desktop, with the recent Gnome 3 taking up some tiling concepts itself. I have since then switched to the Awesome Window Manager, which is probably the worst name you can use for software (try googling for "Awesome crash" to get an idea). I have switched from Firefox to Chromium, after having repeated crashes from Firefox on a site that Chrome would load fine. I was expecting Chrome to crash only one tab, because of its design, and it turned out to not crash at all, and I have since given up on Firefox totally. I have also given up on desktop environments. While I have given them a chance in quick setups and still use them when forced to, I find them too bloated and single-minded for my needs. I want control over my desktop, and those tools work very hard to remove control and hide it behind user interfaces that treat me like a brain-dead dog. Finally, I have kept on programming in PHP for work, because that's what we do over there (unfortunately), mostly because of Drupal and Aegir, in which I was and still am deeply involved. But I am quite fond of Python for my personal projects and still use shell scripting to get myself out of trouble or automate data processing from time to time. I have a rule though: once a program hits 100 lines of shell scripting, it needs to be rewritten, usually in Perl because the syntax is closer. During the years 2009 to 2012, I started to work more and more on system administration, however. In 2010, I became a Debian Developer, and a lot of my work at Koumbit revolves around system administration, optimisation and maintenance. I work on call and sometimes get called during the night to nurse servers (which is why I am up writing this right now).

The future of programming I have become seriously disgruntled with programming. I have seen the emergence of the web, where programmers felt it was relevant to rewrite 30 years of programs into the web framework (webmail? wtf?), and now we are seeing the same thing happen with mobile devices (the tracking devices they call "phones" and "pads", we should rather say). And again they are trying to make people pay for trivial software. I am tired of seeing our community reinvent the wheel every 10 years because of infatuated egos and marketing sprints. We are wasting great minds on this, a lot of energy and precious time. I would rather maintain 5 year old imperfect software than coordinate a 5 year project to rewrite that software into another imperfect software. That is what is happening right now, and it's going nowhere. So here's the current state of affairs: some perls, some tiles, but mostly system administration and less programming. Through all this time, the common pattern is that I have learned to solve problems and document my work, which are key concepts of the job of a system administrator, which is what I really am now. Somehow, that popped into my life like a natural fit. Maybe, one day, I will retire and become a shepherd in far away mountains, but I will always have in my mind the strong desire to fix shit up and make things right with the world. Maybe get involved in a project like Open Source Ecology, and especially their Global Village Construction Set, to make sure humankind survives all that stupidity.

9 August 2012

C.J. Adams-Collier: Linus on Instantiation and Armadaification

I feel a sense of pride when I think that I was involved in the development and maintenance of what was probably the first piece of software accepted into Debian which then had, and still has, direct up-stream support from Microsoft. The world is a better place for having Microsoft in it. The first operating system I ever ran on an 8086-based CPU was MS-DOS 2.x. I remember how thrilled I was when we got to see how my friend's 80286 system ran BBS software that would cause a modem to dial a local system and display the application as if it were running on a local machine. Totally sweet. When we were living at 6162 NE Middle in the nine-eight 292, we got an 80386 which ran Doom. Yeah, the original one, not the fancy new one with the double barrel shotgun, but it would probably run that one, too. It was also totally sweet and all thanks to our armadillo friends down south and partially thanks to their publishers, Apogee. I suckered my brothers into giving me their allowance from Dad one time so that we could all go in on a Sound Blaster Pro 16 sound card for the family's 386. I played a lot of Team Fortress and Q2CTF on that rig. I even attended the Quake 3 Arena launch party that happened at Zoid's place. I recall that he ported the original Quake to Linux. I also recall there being naughty remarks included in the README.txt. When my older brother, Aaron, turned 16, he was gifted a fancy car. When asked what type of car I would like when I turned 16, I said that I'd prefer a computer instead. So I got a high-end 80486 with math co-processor. It could compile the kernel in 15 minutes flat. With all the bits turned on in /usr/src/linux/.config. But this was later. I hadn't even heard of Linux when I got my system. I wanted to be entertained by the thing. I made sure to get a CD-ROM and a sound card. I got on the beta for Ultima Online and spent a summer as a virtual collier, digging stuff out of mines north of Britannia and hauling them to town to make weapons and armor out of them. And then setting out in said armor only to be PK'd because I forgot healing potions and I was no good at fighting. While I was in the middle of all this gaming, my friend Lucas told me that I should try out this lynx thing that they run at the University of Washington. He heard that it was reported to run Doom faster than it ran on MS-DOS. It turns out that it did, but that it was not, in fact, called lynx. Or pine. The Doom engine ran so fast that the video couldn't keep up. This was probably because they didn't use double buffering for frame display, since they didn't want to waste the time maintaining and switching context. I think I downloaded the boot/root 3.5" disk pair and was able to get the system to a shell with an on-phone assist from the Rev. I then promptly got lost in bash and the virtual terminals (OMG! I GET SIX CONSOLES!?) and bought a book on the subject. It shipped with Slackware. Which I ran. Until Debian came along. Lucas also recommended that I try out this IRC thing, so I did. And I'm still doing it on #linpeople just like I did back then. I learned to write Pascal on DOS. Then I learned C while they were trying to teach me C++. I learned emacs and vi when I was attending North Kitsap High School. I learned sed and only a little awk when I took Running Start classes in Lynnwood at Edmonds Community College, and perl & x.509 while attending Olympic Community College and simultaneously jr-administering Sinclair Communications.
I studied TCP/IP, UNP, APUE, C and algorithms & data structures while preparing for an interview with a company whose CEO claimed to have invented SCSI. I learned PGP and PHP while writing web-based adware for this company. I didn't want to write ads and instead wanted to work in security, so I took a job with Security Portal. While there, I wrote what one might call a blogging platform. It worked and made it possible for authors to write prose and poetry. Editors didn't have to manage a database in order to review and publish the posts that were ready. Everyone but me was able to avoid HTML and CGI. Then I sold pizza. Then I helped bring the Bombay Company onto the interwebs using the Amazon ECS (now AWS) platform. Then I helped support MaxDB. Then I helped develop and maintain the Amazon blogging platform. And then attempted to reduce the load on the Amazon pager system by doing and enforcing code reviews. It turns out that they prefer to run their support team at full bore and a load average of 16.
I am now, still, fully employed in an effort to make hard things possible. The hard thing we're working on now is the implementation and ongoing operations of distributed X.500 infrastructure. This includes request handling, processing and delivery of responses (à la HTTP, SMTP, IMAP, SIP, RTP, RTSP, OCSP), including authentication, authorization and auditing (AAA) of all transactions. It's a hard thing to get right, but our product development team gets it right. Consistently and reliably. We make mistakes sometimes (sorry Bago), but we correct them and make the product better. I'm the newest member of an R and d team (note: big R, little d) called NTR, which sits behind the firewall that is Product Development, out of production space. In a manner that reminds me of Debian Testing. We try new things. Our current project is to allow users to compare their current (cloud-based or iron-based) IT system with what their system would be like with a BIG-IP in front of it. I can probably come up with a demo if anyone's interested in checking it out. I'll go work on that now.


28 July 2012

John Goerzen: How to get started programming?

I have been asked for advice from several people recently on how to get started programming, or how to further develop a nascent interest in coding or software engineering. The people asking the questions range in age from about 10 years old to older than me. These are people that, for various reasons, are not very easily able to take computer science courses right now. One would think that, since I've been doing this for somewhere around a quarter century (oh, I do feel old now), I'd be ready to offer up some great advice. And offer some suggestions I have. But I'm not convinced they're good ones. I have two main tensions. The first is that I, like many in the communities I tend to hang out in, such as Debian's, have a personality that leads me to take a deep dive into the details of anything that holds my interest. Whether it's Linux, Haskell, or amateur radio, I want to do more than skim the surface if I'm having fun with it. Many people are not like that. They may have a lot of fun programming in Visual Basic, not really caring that other languages are out there. Or some people are not like this yet. I feel unqualified to provide good advice to people that are different from me in that way. To put it a different way: most people don't want to wait 4 years to be useful, and want to start out right away and get better over time (and I was the same way too). The second is related. I learned programming at a time when, other than BASIC, interpreted languages were not really available to me. (Yes, they were available, but not to me.) I cut my teeth on BASIC, Pascal, and C. Although I rarely use C anymore, I can still drop into it at a moment's notice and be perfectly comfortable. I feel it was a fundamentally valuable experience, and that it would be very hard to become a great programmer without ever having lived and breathed something like C, where memory and pointers must be managed manually. Having said that, it is probably possible to become a good coder without ever having touched C. Here, then, is an edited version of some rambly advice I sent to someone recently, where learning OOP was particularly mentioned. I would welcome your comments and suggestions. I may point people that ask to this post in the future. For simply learning how to write code, Dive Into Python has long been a decent resource, though it may assume more experience than some have. I haven't read them myself, but I've also heard good things about the How to Think Like a Computer Scientist series from Green Tea Press. They're all available as free PDF downloads, too! Eric S. Raymond's The Art of Unix Programming is another work I've heard good things about, despite having never read it myself. A quick glance at the table of contents makes me think that even if people don't wind up working on Unix, the lessons and philosophy should be informative. It seems that many Computer Science programs are using Java for the core of their instruction, or even almost exclusively. Whether that is good or bad, I'm not completely sure. It certainly gets people into OOP more deeply, but I'm a "right tool for the job" kind of person. Despite the hype, OO, like everything else, isn't the right tool for every job. It is fine for people to dive straight into OO and become good programmers/engineers. However, I think it would be difficult to become a great programmer/engineer without ever having a solid understanding of a more low-level language, such as C in particular. I did my CS work when it was mostly based in C, and am glad for it.
If someone never has to manage memory or pointers, I suspect they will be at a disadvantage in the long run for not being able to understand or work with the system at a more fundamental level. If a person knows C, plus some concepts of OO and Functional Programming (FP), it should be easy to pick up just about any other language out there. I used to think Python was a great first language, but during the 2.x series they added so much fluff and so many special cases that I'm less enthusiastic now, though I don't know how much of that got cleaned up in 3.x. I am not too keen on Java as a first language, because too many things that should be simple aren't. I have a fondness for Haskell, and its close relationship to mathematics could make it a great first language, or maybe a poor one, depending on your perspective. One other thing: I think it's important for good programmers to have experience with all three major models of programming (procedural, OO, functional). Even if a person winds up working mostly in one universe, knowledge of and experience with the others is important and informative and, in my experience, leads to better algorithms and architecture all around.

5 June 2012

Axel Beckert: Finding similar but not identical files

There are quite a few tools to find duplicate files in Debian, and depending on the task I use either hardlink (see this blog posting), fdupes (if I need output with all identical files on one line; see example below), or duff (if it has to be performant). But for code deduplication in historically grown code you sometimes need a tool which not only finds identical files, but also those which just differ in a few blanks or blank lines. I found two tools in Debian which can give you some kind of percentage of similarity: simhash (which is btw. orphaned; upstream homepage) and similarity-tester (upstream homepage). simhash has the shorter name and hence sounds more usable on the command-line. But it seems to only be able to compare two files at once, and also only after first computing and writing down its similarity hash to a file. Not really usable for those one-liner cases on the command-line. similarity-tester has the longer name (and one which made me suspect that it may be a GUI tool), but provides what I was looking for:
$ find . -type f | sim_text -ipTt 75
This lists all files in the current directory which have at least 75% (-t 75) in common with another file in the list of files. The option -i causes sim_text to read the files to compare from standard input; -p causes sim_text to just output the similarity percentage; and -T suppresses the per-file list of found tokens. I used similarity-tester's sim_text tool, which compares natural language, as most of the files I had to test are shell scripts. But similarity-tester also provides tools to test the similarity of code in specific programming languages, namely C, Java, Pascal, Modula-2, Lisp and Miranda. Example output from the xen-tools project (after I already did a lot of code deduplication):
./intrepid/30-disable-gettys consists for 100 % of ./edgy/30-disable-gettys material
./edgy/30-disable-gettys consists for 100 % of ./intrepid/30-disable-gettys material
./common/90-make-fstab-rpm consists for 98 % of ./centos-5/90-make-fstab material
./centos-5/90-make-fstab consists for 98 % of ./common/90-make-fstab-rpm material
./gentoo/55-create-dev consists for 91 % of ./dapper/55-create-dev material
./dapper/55-create-dev consists for 90 % of ./gentoo/55-create-dev material
./gentoo/55-create-dev consists for 88 % of ./common/55-create-dev material
./common/90-make-fstab-deb consists for 87 % of ./common/90-make-fstab-rpm material
./common/90-make-fstab-rpm consists for 85 % of ./common/90-make-fstab-deb material
./common/30-disable-gettys consists for 81 % of ./karmic/30-disable-gettys material
./intrepid/80-install-kernel consists for 78 % of ./edgy/80-install-kernel material
./edgy/30-disable-gettys consists for 76 % of ./karmic/30-disable-gettys material
./karmic/30-disable-gettys consists for 76 % of ./edgy/30-disable-gettys material
./common/50-setup-hostname-rpm consists for 76 % of ./gentoo/50-setup-hostname material
Depending on the length of the filenames and the number of files, this can be made more readable using the column utility from the bsdmainutils package, and reversed by using tac from the coreutils package:
$ find . -type f | sim_text -ipTt 75 | tac | column -t
./common/50-setup-hostname-rpm  consists  for  76   %  of  ./gentoo/50-setup-hostname    material
./karmic/30-disable-gettys      consists  for  76   %  of  ./edgy/30-disable-gettys      material
./edgy/30-disable-gettys        consists  for  76   %  of  ./karmic/30-disable-gettys    material
./intrepid/80-install-kernel    consists  for  78   %  of  ./edgy/80-install-kernel      material
./common/30-disable-gettys      consists  for  81   %  of  ./karmic/30-disable-gettys    material
./common/90-make-fstab-rpm      consists  for  85   %  of  ./common/90-make-fstab-deb    material
./common/90-make-fstab-deb      consists  for  87   %  of  ./common/90-make-fstab-rpm    material
./gentoo/55-create-dev          consists  for  88   %  of  ./common/55-create-dev        material
./dapper/55-create-dev          consists  for  90   %  of  ./gentoo/55-create-dev        material
./gentoo/55-create-dev          consists  for  91   %  of  ./dapper/55-create-dev        material
./centos-5/90-make-fstab        consists  for  98   %  of  ./common/90-make-fstab-rpm    material
./common/90-make-fstab-rpm      consists  for  98   %  of  ./centos-5/90-make-fstab      material
./edgy/30-disable-gettys        consists  for  100  %  of  ./intrepid/30-disable-gettys  material
./intrepid/30-disable-gettys    consists  for  100  %  of  ./edgy/30-disable-gettys      material
Compared to that, fdupes only finds the two 100% identical files:
$ fdupes -r1 . 
./intrepid/30-disable-gettys ./edgy/30-disable-gettys 
But fdupes already helped me a lot in finding the first bunch of identical files in xen-tools. :-)

22 November 2011

Thorsten Glaser: Those small nice tools we all write

This is both a release announcement for the next installment of The MirBSD Korn Shell, mksh R40b, and a follow-up to Sune's article about small tools of various degrees of usefulness. I hope I don't need to say too much about the first part; mksh(1) is packaged in a gazillion of operating environments (dear Planet readers, that of course includes Debian, which occasionally gets a development snapshot; I'll wait with uploading R40c until that gcc bug, fixed two months ago, finally finds its way into the packages for armel and armhf). Ah, we're getting Arch Linux (after years) to include mksh now. (Probably because they couldn't stand the teasing that Arch Hurd included it one day after having been told about its existence, wondering why it built without needing patches on Hurd.) MSYS is a supposedly supported target now, people are working on WinAPI and DJGPP in their spare time, and Cygwin and Debian packagers have deprecated pdksh in favour of mksh (thanks!). So, everything is looking well on that front. I started a collection of shell snippets some time ago, where most of those small things of mine end up. Even stuff I write at work; we're an Open Source company and can generally publish under (currently) AGPLv3 or (if extending existing code) that code's licence. I chose git as SCM in that FusionForge instance so that people would hopefully use it and contribute to it without fear, as it's hosted on my current money source's servers. (You can just clone it.) Feel free to register and ask for membership, to extend it (only if your shell-fu is up to the task; KNOPPIX-style scripts would be a bad style(9) example, as the primary goal of the project is to give good examples to people who learn shell coding by looking at other people's code). Maybe you like my editor, too? At OpenRheinRuhr, the Atari people sure liked it, as it uses WordStar-like key combinations, standardised across a lot of platforms and vendors (DR DOS Editor, Turbo Pascal, Borland C++ for Windows, ...). ObPromise: a posting to raise the level of ferrophility on the Planet aggregators this wlog reaches (got pix).

10 December 2010

Jonathan McDowell: Why Linux? (Part 2: Efficiency)

(This is part of a series of posts on Why Linux?) My first PC was an Amstrad PPC640D; an 8088 with twin 720k 3.5" disk drives. It never ran Windows (3.0 was current at that point in time and I don't think it would manage to run off a single floppy), so ran DOS. I moved on to an 8086 desktop machine, complete with 10M full height 5.25" HDD and CGA graphics. It still runs DOS. From there I moved to an 80386DX-40 desktop, with 4M RAM and SVGA graphics. A massive leap forward, and something actually capable of running more than DOS. Except I didn't. I had a Windows for Workgroups 3.11 install, but mostly I still did what I needed to from DOS. The machine was never networked; it had a modem attached but that was used to connect to Fidonet which was well serviced by DOS tools. I put Linux on the box at one point but it was C and TCP/IP and I was Pascal and Fido in those days, so I didn't really know what to make of it. Fast forward a few years and I'm still mostly using DOS, but I'm on a 486 and am running a separate machine as a BBS. It's using RemoteAccess under DOS and it feels like with 486 hardware I should be able to do some of this multitasking lark. I try OS/2 and Windows 95, but both end up dropping modem data when doing other things (I've moved on to ISDN at this point). Maybe I just needed to tweak things more, but I deem multitasking with BBS software a failure and go back to DOS and 2 machines. When I went to university one of my new course mates had brought a machine running Linux, and various of the older students are already running it. There was a wealth of information and interest available to me. So I try again. I've learnt C and TCP/IP networking since last time, and suddenly it all makes more sense and I'm able to do more with it (I'm sure a summer using HP/UX on my desktop at Nortel helped). And, as I finally get to the point, it's efficient. It makes use of the extra memory in the machine that DOS can only touch with kludges. It allows me to multitask in a usable fashion. I don't feel the need for a GUI so I don't have to run one, which no doubt helps, but everything that's running is easily visible and tunable. I start running it as my desktop at university and convert the BBS over to it at some point soon after. It doesn't drop modem data. I rejoice, and don't look back. In those days I was running hardware that was probably at the low end of what the popular multitasking OSes wanted (I remember seeing Win95 on a 4M machine when it first came out, and it crawled). These days my main machines are (I would hope) more than capable of running Windows well. The efficiency angle is still an appeal of Linux though; for example in the server space I don't understand why you'd want to run something with the overhead of an always on GUI (people who leave Linux servers running GDM confuse me). I want to be able to do everything I need on a server remotely, be that via SSH or, in a pinch, a serial console. I don't want to sit at the box and use a GUI. Linux lets me run only what I actually need on the machine.

1 November 2010

Michal Čihař: Cleaning up the web

After the recent switch of this blog to Django, I've also planned to move the rest of my website to the same engine. Most of the code is already written; however, I've found some forgotten parts which nobody has used for a really long time. As most of the things there are free software, I don't like the idea of removing them completely, but it is quite unlikely that anybody will want to use these ancient things, especially as they mostly lack documentation and I had really forgotten about them (most of them being Turbo Pascal things for DOS, Delphi components for Windows, and so on). Anyway, they will probably live on only at dl.cihar.com in case anybody is interested.


24 July 2010

Andrew Pollock: [geek] Cleaning up from 20 years ago

I'm a terrible hoarder. I hang onto old stuff because I think it might be fun to have a look at again later, when I've got nothing to do. The problem is, I never have nothing to do, or when I do, I never think to go through the stuff I've hoarded. As time goes by, the technology becomes more and more obsolete, to the point where it becomes impractical to look at it. Today's example: the 3.5" floppy disk. I've got a disk holder thingy with floppies in it dating back to the mid-nineties and earlier. Stuff from high school, which I thought might be good for a giggle to look at again some time. In the spirit of recording stuff before I throw it out, I present the floppy disks I'm immediately tossing out.
MS-DOS 6.2 and 6.22
Ah the DOS days. I remember excitedly looking forward to new versions of MS-DOS to see what new features they brought. I remember DOS 5.0 being the revolutionary one. The dir command grew a ton of options.
XTreeGold
More from the DOS days, when file management was such a pain in the arse that there was a business model to do it better. ytree seems like a fairly good looking clone of it for Linux.
WinZip for Windows 95, Windows NT and Windows 3.1
Ha. I actually paid money for an official WinZip floppy disk.
Nissan Maxima Electronic Brochure
I'm amazed this fit on a floppy disk.
Turbo Pascal 6.0
Excluding GW-BASIC, this was the first "real" language I dabbled in. I learned it in Information Processing & Technology in grades 11 and 12. I never got into the OO stuff that version 6.0 was particularly geared towards.
Where in the World is Carmen Sandiego?
Awesome educational game. I was first introduced to this on the Apple ][, and loved it. This deserves being resurrected for a console.
Captain Comic II
Good sequel to the original, but I never found a version that worked properly (I could never convince it to let me finish it).
HDM IV
Ah, Hard Disk Menu. A necessity from the DOS days when booting up to a C:\> prompt just really didn't cut it. I used to love customising this thing.
ARJ, LHA, PK-ZIP
Of course, you needed a bazillion different decompression programs back in the days of file trading. I guess things haven't changed much with Linux. There's gzip, bzip2, 7zip, etc.
Zeliard
I wasted so many hours playing this. The ending was so hard.
MicroSQL
This was some locally produced software from Brisbane, written in Turbo Pascal (I think). It was a good introduction to SQL; I used it in high school and during my first stab at university.
DOOM and DOOM II
Classics. I don't seem to have media for it any more, but I also enjoyed playing Heretic and Hexen. Oooh, Hexen has been ported to Linux? Must check that out...
SimCity 2000
I wasn't a big fan of this game, but I liked the isometric view that 2000 had, compared to the previous version.

19 May 2010

John Goerzen: Time to learn a new language

I have something of an informal goal of learning a new programming language every few years. It's not so much a goal as it is something of a discomfort. There are so many programming languages out there, with so many niches and approaches to problems, that I get uncomfortable with my lack of knowledge of some of them after a while. This tends to happen every few years. The last major language I learned was Haskell, which I started working with in 2004. I still enjoy Haskell and don't see anything displacing it as my primary day-to-day workhorse. Yet there are some languages that I'd like to learn. I have an interest in cross-platform languages; one of my few annoyances with Haskell is that it can't (at least with production quality) be compiled into something like Java bytecode or something else that isn't architecture-dependent. I have long had a soft spot for functional languages. I haven't had such a soft spot for static type checking, but Haskell's type inference changed that for me. Also I have an interest in writing Android apps, which means some sort of Java tie-in would be needed. Here are my current candidates: Of some particular interest to me is that Haskell has interpreters for Scheme, Lua, and JavaScript as well as code generators for some of these languages (though not generic Haskell-to-foo compilers). Languages not in the running because I already know them include: OCaml, POSIX shell, Python, Perl, Java, C, C++, Pascal, BASIC, Common Lisp, Prolog, SQL. Languages I have no interest in learning right now include Ruby (not different enough from what I already know, plus bad experiences with it), any assembly, anything steeped in the Microsoft monoculture (C#, VB, etc.), or anything that is hard to work with outside of an Emacs or vim environment. (If your language requires or strongly encourages me to use your IDE or proprietary compiler, I'm not interested; that means you, flash.)

Brief Reviews of Languages I Have Used

To give you a bit of an idea of where I'm coming from:

13 January 2010

Matt Brubeck: Finding SI unit domain names with Node.js

I'm working on some ideas for finance or news software that deliberately updates infrequently, so it doesn't reward me for checking or reloading it constantly. I came up with the name "microhertz" to describe the idea. (1 microhertz ≈ once every eleven and a half days.) As usual when I think of a project name, I did some DNS searches. Unfortunately "microhertz.com" is not available (but "microhertz.org" is). Then I went off on a tangent and got curious about which other SI units are available as domain names. This was the perfect opportunity to try node.js, so I could use its asynchronous DNS library to run dozens of lookups in parallel. I grabbed a list of units and prefixes from NIST and wrote the following script:
var dns = require("dns"), sys = require('sys');
var prefixes = ["yotta", "zetta", "exa", "peta", "tera", "giga", "mega",
  "kilo", "hecto", "deka", "deci", "centi", "milli", "micro", "nano",
  "pico", "femto", "atto", "zepto", "yocto"];
var units = ["meter", "gram", "second", "ampere", "kelvin", "mole",
  "candela", "radian", "steradian", "hertz", "newton", "pascal", "joule",
  "watt", "colomb", "volt", "farad", "ohm", "siemens", "weber", "henry",
  "lumen", "lux", "becquerel", "gray", "sievert", "katal"];
for (var i=0; i<prefixes.length; i++) {
  for (var j=0; j<units.length; j++) {
    checkAvailable(prefixes[i] + units[j] + ".com", sys.puts);
  }
}

function checkAvailable(name, callback) {
  var resolution = dns.resolve4(name);
  resolution.addErrback(function(e) {
    if (e.errno == dns.NXDOMAIN) callback(name);
  });
}
Out of 540 possible .com names, I found 376 that are available (and 10 more that produced temporary DNS errors, which I haven't investigated). Here are a few interesting ones, with some commentary: To get the complete list, just copy the script above to a file, and run it like this: node listnames.js Along the way I discovered that the API documentation for Node's dns module was out-of-date. This is fixed in my GitHub fork, and I've sent a pull request to the author Ryan Dahl.
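On systems where this early promise-style dns API is not available, a rough shell equivalent of the same check is the sketch below; it tests the same 540 names sequentially with host(1), so it is much slower than the parallel Node version above.

#!/bin/sh
# Sequential sketch: print the .com names that return NXDOMAIN.
prefixes="yotta zetta exa peta tera giga mega kilo hecto deka
deci centi milli micro nano pico femto atto zepto yocto"
units="meter gram second ampere kelvin mole candela radian steradian
hertz newton pascal joule watt coulomb volt farad ohm siemens weber
henry lumen lux becquerel gray sievert katal"
for p in $prefixes; do
  for u in $units; do
    host -t A "$p$u.com" 2>/dev/null | grep -q 'NXDOMAIN' && echo "$p$u.com"
  done
done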

29 December 2009

John Goerzen: Review: The Happiest Days of Our Lives (by Wil Wheaton)

I started to write this review last night, and went looking for Wil Wheaton's blog, where many of the stories came from, so I could link to it from my review. It was getting late, I was tired, and so I was a bit disoriented for a few seconds when I saw my own words flash up on the screen. At the time, his most recent story had excerpted my review of paper books. Wow, I thought. This never happens when I'm about to review Dickens. And actually, it's never happened before, ever. I'll admit to wearing a big grin when I saw that one of my favorite authors liked one of my blog posts. And Wil Wheaton is one of my favorite authors for sure. I enjoy reading others too, of course, but Wil's writing is something I can really identify with like no other. My parents were never in a London debtor's prison like Dickens's were; I was never a promising medical student like A. C. Doyle. But I was, and am, a geek, and Wil Wheaton captures that more perfectly than anyone. After I read Just a Geek a few years ago, I gave it to my wife to read, claiming it would help her understand me better. I think it did. In The Happiest Days of Our Lives, Wil recounts memories of his childhood, and of more recent days. He talks of flashbacks to his elementary school days, when he and his classmates tried to have the coolest Star Wars action figures (for me: calculator watches). Or how his aunt introduced him to D&D, which reminded me of how my uncle got me interested in computers. Teaching himself D&D was an escape for the geeky kid that wasn't good at sports, as teaching myself Pascal and C was for me. Between us, the names and activities are different, but the story is the same. I particularly appreciated Wil's reflections on his teenage years. Like him, at that age, I often found myself as the youngest person in a room full of adults. Yet I was still a teenager, and like any teenager, did some things that I look back on with some embarrassment now. Wil was completely honest with himself: he admitted crashing a golf cart on the Paramount studio lot, for instance, but also reminds me that he was a teenager then. He recognizes that he didn't always make the best choices and wasn't always successful with what he did, but isn't ashamed of himself either. That's helpful for me to remember; I shouldn't be unreasonably harsh on my 16-year-old self, and need to remember that I had to be a teenager too. I also identify with him as a dad. He wrote of counting the days until he could teach his boys about D&D, about passing on being a geek to his sons. I've had a similar excitement about being able to help Jacob build his first computer. Already Jacob, who is 3, loves using the manual typewriter I cleaned up for him, and spent an hour using the adding machine I dug out on Sunday while I was watching the boys. (I regret that I didn't have time to take it apart and show him how it worked right then when he asked.) And perhaps his 2nd-favorite present of Christmas was the $3.50 large-button calculator with solar cell power I got him as an impulse buy at the pharmacy the other day. He is particularly enamored with the square root button, because a single press replaces all the numbers on the screen with completely different numbers! I can't find the exact passage now, but Wil wrote at one point about his transition from a career in acting to a career in writing. He said that he likes the feeling he gets when his writing can touch people. He's been able to redefine himself not as a guy that used to be an actor on Star Trek, but as a person that is a good author, now. I agree, and think his best work has been done with a keyboard instead of a camera. And that leaves me wondering where my career will take me. Yes, I'm an author, but of technical books. Authors of technical books rarely touch people's hearts. There's a reason we read Shakespeare and Dickens in literature classes, but no high school English teacher has ever assigned Newton's Opticks, despite its incredible importance to the world. Newton revolutionized science, mathematics, and philosophy, but Opticks doesn't speak to the modern heart like Romeo and Juliet still does. Generations of people have learned more about the world from Shakespeare than from Newton. I don't have Wil's gift for writing such touching stories. I've only been able to even approach that sort of thing once or twice, and it certainly won't make a career for me. Like Wil, I'm rarely the youngest person in the room anymore. His days of being a famous teenage actor on a scifi series are long gone, as are mine of single-handedly defeating entire teams at jr. high programming contests. (OK, that's a stretch, but at the time it sure felt exciting.) But unlike him, I'm not completely content with my niche yet. I blog about being a geek in rural Kansas, where there still aren't many. I'm a dad, with an incredible family. And I write about programming, volunteer for Debian and a few other causes, and have a surprisingly satisfying job working for a company that builds lawn mowers. And yet, I have this unshakable feeling of unsettledness. That I need to stop and think more about what I really want to do with my life, perhaps cultivate some talents I don't yet have, or perhaps find a way to make my current path more meaningful. So I will take Wil's book as a challenge, to all those that were once sure of what their lives would look like, and are less sure with each passing year: take a chance, and make it yours. And on that score, perhaps I've done more than I had realized at first. Terah and I took a big chance moving to Kansas, and another one when we bought my grandparents' run-down house to fix up and live in. Perhaps it's not a bad idea to pause every few years and ask the question: Do I still like the direction I'm heading? Can I change it? Wil Wheaton gives me lots to think about, in the form of easy-to-read reflections on his own life. I heartily recommend both Just a Geek and The Happiest Days of Our Lives. (And that has nothing to do with the fact that the Ubuntu machine he used to write the book probably had installed on it a few pieces of code that I wrote, I promise you.)

16 September 2009

Sergio Talens-Oliag: Encrypting a Debian GNU/Linux installation on a MacBook

A couple of weeks ago I updated my Debian Sid setup on the MacBook to use disk encryption; this post is to document what I did for later reference. The system was configured for dual booting Debian or Mac OS X using refit and grub2 as documented on the Debian Wiki; I don't use the Mac OS X system much, but I left it there to be able to test things and to be able to answer questions from Mac OS X users when I have to. The Debian installation was done using two primary partitions, one for swap (I used a partition to be able to suspend to disk without trouble) and an ext3 file system used as the root file system. The plan was to use the Debian Installer to do the disk setup and recover the Sid installation from a backup once the encrypted setup was working OK.

Backup for later recovery

My first step was to install all the needed packages on the original system; basically I verified that I had the lvm2 and cryptsetup packages installed. The second step was to back up the root file system; to do it I changed to run level 1 and copied the files to an external USB disk using rsync. My third step was to boot into Mac OS X to reduce the space assigned to it; I had a lot of free space that I didn't plan to use with Mac OS X and I thought that this was the best occasion to reassign it to the Debian file system.

Encrypted Lenny installation

Now the machine was ready for the installer. As I formatted the system a couple of weeks ago I used a daily build of the Lenny Debian Installer; now that Lenny is out I would have used the official version. I booted the installer and on the disk partitioning step I selected the manual method; I left sda1 and sda2 as they were (the Mac OS X installation uses them) and set up sda3 and sda4 as follows: Note that I decided to put /boot on a plain ext3 partition to be able to use grub2 as the boot loader (if we put the kernel on an LVM logical volume we need to use lilo as the boot loader). Once sda4 was set up for LVM I entered the LVM setup and created an LVM Volume Group (VG) with the name debian, using sda4 as the physical volume. Once the VG was defined I created a couple of Logical Volumes (LVs): I left some space unallocated to be able to create LVM snapshots (I use them to do backups; I'll post about it in the next few days). Once the LVs were ready I finished with the LVM setup and went back to the partitioner to configure the Logical Volumes: Once both encrypted volumes were ready I entered the Configure the encrypted volumes menu and the installer formatted the volumes for encryption and asked for the debian-root pass phrase. Back on the main partitioning menu I set up the debian-root_crypt encrypted volume: I didn't need to touch debian-swap_crypt; it was configured automatically as swap because I chose a random encryption key. At this point I was finished with the partitioning; to finish I installed a minimal system and rebooted to try it. As I had changed the disk layout I had to re-sync the partition tables from refit; once that was done I was able to boot from the newly installed system.

Setting up suspend to disk

I was using s2disk to suspend the system; to test whether it still worked with the new setup I installed the uswsusp package and adjusted the resume device in /etc/uswsusp.conf to /dev/mapper/debian-swap_crypt. After my first try I noticed that the resume step failed with the encrypted swap partition because it was using a random key, which means that the swap contents are unrecoverable after a reboot.
Looking at the cryptsetup documentation I found that the solution was to use a derived key for the swap partition instead of a random one. The command sequence was as follows:
# disable swap
swapoff -a
# close encrypted volume
cryptsetup luksClose debian-swap_crypt
# change the swap partition setup on the /etc/crypttab file
sed -i -e 's%^debian-swap.*%debian-swap_crypt /dev/mapper/debian-swap debian-root_crypt cipher=aes-cbc-essiv:sha256,size=256,hash=sha256,keyscript=/lib/cryptsetup/scripts/decrypt_derived,swap%' /etc/crypttab
# open the encrypted volumes with the new setup
/etc/init.d/cryptdisks start
# enable swap
swapon -a
# update the initrd image
update-initramfs -u
After executing all those commands the suspend to disk system worked as expected.

Recovering the original system

If I were going to reinstall the system completely I would have finished here, but in my case I wanted to recover my original system setup (except for the minimal changes required to use the encrypted partitions, of course). To recover my old installation I backed up some files from the current installation (/etc/fstab, /etc/crypttab, /etc/uswsusp.conf, and the current /boot contents, to be able to boot with my old kernel in case of failure); after that I recovered all the files from the initial backup (except the ones just saved) using rsync again and regenerated the initrd images of my old kernels:
update-initramfs -u -k all
After that I rebooted and everything worked as on my original installation (except for the disk encryption, of course).
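For reference, the crypttab line produced by the sed command above has this shape (all one line; the four fields are the target name, the source device, the key field, and the options, where the key field names the volume the decrypt_derived keyscript derives the key from):

debian-swap_crypt /dev/mapper/debian-swap debian-root_crypt cipher=aes-cbc-essiv:sha256,size=256,hash=sha256,keyscript=/lib/cryptsetup/scripts/decrypt_derived,swap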

19 April 2009

Martin F. Krafft: Extending the X keyboard map with xkb

xmodmap has long been the only way to modify the keyboard map of the X server, short of the complex configuration daemon approaches used by the large desktop managers, like KDE and GNOME. But it has always been a hack: it modifies the X keyboard map and thus requires a baseline to work from, kind of like a patch needs the correct context to be applicable. Worse yet, xmodmap weirdness required me to invoke it twice to get the effect I wanted. When the recent upgrade to X.org 7.4 broke larger parts of my elaborate xmodmap configuration, I took the time to finally ditch xmodmap and implement my modifications as proper xkb configuration.

Background information

I had tried before to use per-user xkb configuration, but could not find the answers I wanted. It was somewhat by chance that I found Doug Palmer's Unreliable Guide to XKB configuration at the same time that Julien Cristau and Matthew W. S. Bell provided me the necessary hints on the #xorg/irc.freenode.org IRC channel to get me started. The other resource worth mentioning is Ivan Pascal's collection of XKB documents, which were instrumental in my gaining an understanding of xkb. And just as I am writing this document, Debian's X Strike Force have published their Input Hotplug Guide, which is a nice complement to this very document you are reading right now, since it focuses on auto-configuration of xkb with HAL. The default xkb configuration comes with a lot of flexibility, and often you don't need anything else. But when you do, then this is how to do it:

Installing a new keyboard map

The most basic way to install a new keyboard map is using xkbcomp, which can also be used to dump the currently installed map into a file. So, to get a bit of an idea of what we'll be dealing with, please run the following commands:
xkbcomp $DISPLAY xkb.dump
editor xkb.dump
xkbcomp xkb.dump $DISPLAY

The file is complex and large, and it completely went against my aesthetics to simply edit it to have xkb work according to my needs. I sought a way in which I could use as much as possible of the default configuration, and only place self-contained additional snippets in place to do the things I wanted done differently.

setxkbmap and rule files

Thus began my voyage into the domain of rule files. But before we dive into those, let's take a look at setxkbmap. Despite the trivial invocation of e.g. setxkbmap us to install a standard US-American keyboard map, the command also takes arguments. More specifically, it allows you to specify the following high-level parameters, which determine the sequence of events between key press and an application receiving a KeyPress event:
  • Model: the keyboard model, which defines which keys are where
  • Layout: the keyboard layout, which defines what the keys actually are
  • Variant: slight variations in the layout
  • Options: configurable aspects of keyboard features and possibilities
Thus, with the following command line, I would select a US layout with international (dead) keys for my Thinkpad keyboard, and switch to an alternate symbol group with the windows keys (more on that later):
setxkbmap -model thinkpad -layout us -variant intl -option grp:win_switch

In many cases, between all combinations of the aforementioned parameters, this is all you ever need. But I wanted more. If you append -print to the above command, it will print the keymap it would install, rather than installing it:
% setxkbmap -model thinkpad -layout us -variant intl -option grp:win_switch -print
xkb_keymap  
  xkb_keycodes    include "evdev+aliases(qwerty)"        ;
  xkb_types       include "complete"     ;
  xkb_compat      include "complete"     ;
  xkb_symbols     include "pc+us(intl)+inet(evdev)+group(win_switch)"    ;
  xkb_geometry    include "thinkpad(us)"         ;
 ;

There are two things to note:
  1. The -option grp:win_switch argument has been turned into an additional include group(win_switch) on the xkb_symbols line, just like the model, layout, and variant are responsible for other aspects in the output.
  2. The output seems related to what xkbcomp dumped into the xkb.dump file we created earlier. Upon closer inspection, it turns out that the dump file is simply a pre-processed version of the keyboard map, with include instructions exploded.
At this point, it became clear to me that this was the correct way forward, and I started to investigate those points in order. The translation from parameters to an xkb_keymap stanza by setxkbmap is actually governed by a rule file. A rule is nothing more than a set of criteria, and what setxkbmap should do in case they all match. On a Debian system, you can find this file in /usr/share/X11/xkb/rules/evdev, and /usr/share/X11/xkb/rules/evdev.lst is a listing of all available parameter values. The xkb_symbols include line in the above xkb_keymap output is the result of the following rules in the first file, which setxkbmap had matched (from top to bottom) and processed:
! model         layout              =       symbols
  [...]
  *             *                   =       pc+%l(%v)
! model                             =       symbols
  *                                 =       +inet(evdev)
! option                            =       symbols
  [...]
  grp:win_switch                    =       +group(win_switch)

It should now not be hard to deduce the xkb_symbols include line quoted above, starting from the setxkbmap command line. I'll reproduce both for you for convenience:
setxkbmap -model thinkpad -layout us -variant intl -option grp:win_switch
xkb_symbols     include "pc+us(intl)+inet(evdev)+group(win_switch)"    ;

A short note about the syntax here: group(win_switch) in the symbols column simply references the xkb_symbols stanza named win_switch in the symbols file group (/usr/share/X11/xkb/symbols/group). Thus, the rules file maps parameters to sets of snippets to include, and the output of setxkbmap applies those rules to create the xkb_keymap output, to be processed by xkbcomp (which setxkbmap invokes implicitly, unless the -print argument was given on invocation). It seems that for a criterion (option, model, layout, …) to be honoured, it has to appear in the corresponding listing file, evdev.lst in this case. There is also evdev.xml, but I couldn't figure out its role.
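To hook a custom option into this mechanism yourself, you would add a stanza along these lines to the rules file and list the option in evdev.lst (xkbtest:dashes is a hypothetical option name, matching the symbols file created in the next section):

! option                            =       symbols
  xkbtest:dashes                    =       +xkbtest(dashes)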

Attaching symbols to keys

I ended up creating a symbols file of reasonable size, which I won't discuss here. Instead, let's solve the following two tasks for the purpose of this document:
  1. Make the Win-Hyphen key combination generate an en dash (–), and Win-Shift-Hyphen an em dash (—).
  2. Let the Caps Lock key generate Mod4, which can be used e.g. to control the window manager.
To approach these two tasks, let's create a symbols file in ~/.xkb/symbols/xkbtest and add two stanzas to it:
partial alphanumeric_keys
xkb_symbols "dashes" {
  key <AE11> {
    symbols[Group2] = [ endash, emdash ]
  };
};

partial modifier_keys
xkb_symbols "caps_mod4" {
  replace key <CAPS> {
    [ VoidSymbol, VoidSymbol ]
  };
  modifier_map Mod4 { <CAPS> };
};

Now let me explain these in turn:
  1. We used the option grp:win_switch earlier, which told xkb that we would like to use the windows keys to switch to group 2. In the custom symbols file, we now simply define the symbols to be generated for each key, when the second group has been selected. Key <AE11> is the hyphen key. To find out the names of all the other keys on your keyboard, you can use the following command:
    xkbprint -label name $DISPLAY - | gv -orientation=seascape -
    
    
    I had to declare the stanza partial because it is not a complete keyboard map, but can only be used to augment/modify other maps. I also declared it alphanumeric_keys to tell xkb that I would be modifying alphanumeric keys inside it. If I also wanted to change modifier keys, I would also specify modifier_keys. The rest should be straight-forward. You can get the names of available symbols from keysymdef.h (/usr/include/X11/keysymdef.h on a Debian system, package x11proto-core-dev), stripping the XK_ prefix.
  2. The second stanza replaces the Caps Lock key definition and prevents it from generating symbols (VoidSymbol). The important aspect of the second stanza is the modifier_map instruction, which causes the key to generate the Mod4 modifier event, which I can later use to bind key combinations for my window manager (awesome).
The easiest way to verify those changes is to put the setxkbmap -print output of the keyboard map you would like to use as a baseline into ~/.xkb/keymap/xkbtest, and append snippets to be included to the xkb_symbols line, e.g.:
"pc+us(intl)+inet(evdev)+group(win_switch)+xkbtest(dashes)+xkbtest(caps_mod4)"

When you try to load this keyboard map with xkbcomp, it will fail because it cannot find the xkbtest symbol definition file. You have to let the tool know where to look, by appending a path to its search list (note the use of $HOME instead of ~, which the shell would not expand):
xkbcomp -I$HOME/.xkb ~/.xkb/keymap/xkbtest $DISPLAY

You can use xev to verify the results, or just type Win-Hyphen into a terminal; does it produce –? By the way, I found xev much more useful for such purposes when invoked as follows (thanks to Penny for the idea):
xev | sed -ne '/^KeyPress/,/^$/p'

Unfortunately, xev does not give any indication of which modifier symbols are generated. I have found no other way to verify the outcome than to tell my window manager to do something in response to e.g. Mod4-Enter, reload it, and then try it out.

Rules again, and why I did not use them in the end

Once I got this far, I proceeded to add option-to-symbol-snippet mappings to the rules file, and added each option to the listing file too. A few bugs (Debian bug #524512) later, I finally had setxkbmap spit out the right xkb_keymap and could install the new keyboard map with xkbcomp, like so:
setxkbmap -I$HOME/.xkb [...] -print | xkbcomp -I$HOME/.xkb - :0

I wrote a small script to automatically do that at the start of the X session and could have gone to play outside, if it hadn't been for the itch I felt due to the entire rule file stored in my configuration. I certainly did not like that, but I could also not find a way to extend a rule file with additional rules. When I looked at the aforementioned script again, it suddenly became obvious that I was going a far longer path than I had to. Even though the rule system is powerful and allows me to e.g. automatically include symbol maps to remap keys on my Thinkpad, based on the keyboard model I configured, the benefit (if any) did not justify the additional complexity. In the end, I simplified the script that loads the keyboard map, and defined a default xkb_keymap, as well as one for the Thinkpad, which I identify by its fully-qualified hostname. If a specific file is available for a given host, it is used. Otherwise, the script uses the default.
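A sketch of such a loader, assuming the ~/.xkb layout used throughout this document and per-host keymap files named after the machine (the exact file names are illustrative):

#!/bin/sh
# Load a host-specific xkb keymap if one exists, else fall back to the default.
XKBDIR=$HOME/.xkb
KEYMAP=$XKBDIR/keymap/$(hostname -f)
[ -f "$KEYMAP" ] || KEYMAP=$XKBDIR/keymap/default
xkbcomp -I"$XKBDIR" "$KEYMAP" "$DISPLAY"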
